Inferring Brain Networks through Graphical Models with Hidden Variables

Authors

  • Justin Dauwels
  • Hang Yu
  • Xueou Wang
  • François B. Vialatte
  • Charles-François Vincent Latchoumane
  • Jaeseung Jeong
  • Andrzej Cichocki
Abstract

Inferring the interactions between different brain areas is an important step towards understanding brain activity. Most often, signals can only be measured from some specific brain areas (e.g., the cortex in the case of scalp electroencephalograms). However, those signals may be affected by brain areas from which no measurements are available (e.g., deeper areas such as the hippocampus). In this paper, the latter are described as hidden variables in a graphical model; such a model quantifies the statistical structure in the neural recordings, conditioned on hidden variables, which are inferred in an automated fashion from the data. As an illustration, electroencephalograms (EEG) of Alzheimer's disease (AD) patients are considered. It is shown that the number of hidden variables in AD EEG does not differ significantly from that in healthy EEG. However, there are fewer interactions between the brain areas, conditioned on those hidden variables. Explanations for these observations are suggested.
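The abstract does not spell out the estimator itself. A common way to realize a graphical model with hidden variables is latent-variable graphical model selection via a sparse-plus-low-rank split of the precision matrix (in the spirit of Chandrasekaran et al.): conditioned on the hidden variables, the observed EEG channels have a sparse conditional dependence structure S, while the hidden variables contribute a low-rank term L. The Python sketch below illustrates that general idea only; the function name latent_ggm and the penalty weights alpha and beta are made up for this example, and it is not claimed to be the exact estimator used in the paper.

# Minimal sketch of a latent-variable Gaussian graphical model
# (sparse-plus-low-rank penalized maximum likelihood). Illustration
# only; NOT necessarily the estimator used by the authors.
import numpy as np
import cvxpy as cp

def latent_ggm(sample_cov, alpha, beta):
    """Estimate S (sparse conditional structure among observed EEG
    channels) and L (low-rank effect of hidden brain areas) so that
    the marginal precision of the observed signals is S - L."""
    p = sample_cov.shape[0]
    S = cp.Variable((p, p), symmetric=True)  # conditional dependencies among observed channels
    L = cp.Variable((p, p), PSD=True)        # contribution of hidden variables
    R = S - L                                # marginal precision matrix of the observations
    objective = cp.Maximize(
        cp.log_det(R) - cp.trace(sample_cov @ R)  # Gaussian log-likelihood
        - alpha * cp.sum(cp.abs(S))               # l1 penalty -> sparse S
        - beta * cp.trace(L)                      # trace penalty -> low-rank L
    )
    problem = cp.Problem(objective, [R >> 0])
    problem.solve()
    return S.value, L.value

# Usage (hypothetical): the number of hidden variables is read off as the
# number of non-negligible eigenvalues of L, and the interactions between
# brain areas conditioned on the hidden variables are the off-diagonal
# non-zeros of S.
# S_hat, L_hat = latent_ggm(np.cov(eeg_data, rowvar=True), alpha=0.1, beta=1.0)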


Similar resources

Graphical Models and Exponential Families

We provide a classification of graphical models according to their representation as subfamilies of exponential families. Undirected graphical models with no hidden variables are linear exponential families (LEFs); directed acyclic graphical models and chain graphs with no hidden variables, including Bayesian networks with several families of local distributions, are curved exponential families ...

Inferring sparse Gaussian graphical models with latent structure

Our concern is selecting the concentration matrix's nonzero coefficients for a sparse Gaussian graphical model in a high-dimensional setting. This corresponds to estimating the graph of conditional dependencies between the variables. We describe a novel framework taking into account a latent structure on the concentration matrix. This latent structure is used to drive a penalty matrix and thus to...
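The snippet above is cut off, but the idea of a latent structure driving a penalty matrix can be illustrated with a weighted Gaussian graphical lasso, where an entrywise weight matrix scales the l1 penalty on the concentration matrix. The sketch below is only an illustration of that construction under assumed weights; the names penalized_glasso and W are hypothetical and do not come from the cited paper.

# Hedged sketch (not the cited paper's algorithm): graphical lasso with
# an entrywise penalty matrix W, which could be derived from an assumed
# latent block structure on the variables.
import numpy as np
import cvxpy as cp

def penalized_glasso(sample_cov, W):
    p = sample_cov.shape[0]
    Theta = cp.Variable((p, p), symmetric=True)  # concentration (precision) matrix
    objective = cp.Maximize(
        cp.log_det(Theta) - cp.trace(sample_cov @ Theta)  # Gaussian log-likelihood
        - cp.sum(cp.multiply(W, cp.abs(Theta)))           # weighted l1 penalty (W >= 0)
    )
    cp.Problem(objective, [Theta >> 0]).solve()
    return Theta.value  # zero off-diagonal entries encode conditional independencies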

Learning Hidden Variables in Probabilistic Graphical Models

In the past decades, a great deal of research has focused on learning probabilistic graphical models from data. A serious problem in learning such models is the presence of hidden, or latent, variables. These variables are not observed, yet their interaction with the observed variables has important consequences in terms of representation, inference and prediction. Consequently, numerous works ...

Asymptotic Model Selection and Identifiability of Directed Tree Models with Hidden Variables

The standard Bayesian Information Criterion (BIC) is derived under regularity conditions that are not always satisfied by graphical models with hidden variables. In this paper we derive the BIC score for Bayesian networks in the case where the data are binary, the underlying graph is a rooted tree, and all the inner nodes represent hidden variables. This provides a direct generalizati...

State estimation in discrete graphical models

p(X_{1:D} | G, θ)  (1) where G is the graph structure (either directed or undirected or both), and θ are the parameters. In Bayesian modeling, we treat the parameters as random variables as well, but they are in turn conditioned on fixed hyperparameters α: p(X_{1:D}, θ | G, α)  (2) Clearly this can be represented as in Equation 1 by appropriately redefining X and θ. It will also be notationally helpful to d...

Journal:

Volume   Issue

Pages  -

Publication date: 2011